114 research outputs found

    Reasoning about Emotional Agents

    In this paper we are concerned with reasoning about agents with emotions. To be more precise: we aim at a logical account of emotional agents. The very topic may already raise some eyebrows: reasoning and rationality on the one hand and emotions on the other seem opposites, so reasoning about emotions, or a logic of emotional agents, sounds like a contradiction in terms. However, emotions and rationality are known to be more interconnected than one might suspect. There is psychological evidence that having emotions may help one to perform reasoning and tasks for which rationality seems to be the only factor [1]. Moreover, work by e.g. Sloman [5] shows that one may think of designing agent-based systems in which the agents show some kind of emotions and, even more importantly, display behaviour that depends on their emotional state. It is exactly in this sense that we aim at looking at emotional agents: artificial systems that are designed in such a manner that emotions play a role. In psychology, too, emotions are viewed as a structuring mechanism: they are held to help human beings choose from a myriad of possible actions in response to what happens in ou

    06261 Abstracts Collection -- Foundations and Practice of Programming Multi-Agent Systems

    From 25.06.06 to 30.06.06, the Dagstuhl Seminar 06261 "Foundations and Practice of Programming Multi-Agent Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    A Formal Model of Emotions: Integrating Qualitative and Quantitative Aspects

    When constructing a formal model of emotions for intelligent agents, two types of aspects have to be taken into account. First, qualitative aspects pertain to the conditions that elicit emotions. Second, quantitative aspects pertain to the actual experience and intensity of elicited emotions. In this presentation, we show how the qualitative aspects of a well-known psychological model of human emotions can be formalized in an agent specification language and how its quantitative aspects can be integrated into this model. Furthermore, we discuss several unspecified details and implicit assumptions in the psychological model that are explicated by this effort.
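
    To give a concrete flavour of how the two types of aspects might be combined, here is a minimal Python sketch. It is not the agent specification language or the psychological model used in the presentation; the emotion labels, the appraisal test, and the decay rule are all hypothetical, chosen only to show qualitative eliciting conditions feeding quantitative intensities.

```python
from dataclasses import dataclass, field

@dataclass
class Emotion:
    """Hypothetical emotion instance: a qualitative label plus a quantitative intensity."""
    kind: str          # e.g. "joy" or "distress" (qualitative aspect)
    intensity: float   # current experienced intensity (quantitative aspect)

@dataclass
class EmotionalAgent:
    """Toy agent: eliciting conditions decide which emotion arises,
    a numeric update rule governs how strongly it is experienced."""
    emotions: list[Emotion] = field(default_factory=list)

    def appraise(self, event: str, desirable: bool, strength: float) -> None:
        # Qualitative aspect: the eliciting condition (here a crude
        # desirability test) determines the emotion type.
        kind = "joy" if desirable else "distress"
        # Quantitative aspect: the event's strength sets the initial intensity.
        self.emotions.append(Emotion(kind, strength))

    def decay(self, rate: float = 0.1) -> None:
        # Quantitative aspect: intensities fade each step; emotions below a
        # threshold are no longer experienced.
        self.emotions = [
            Emotion(e.kind, e.intensity * (1 - rate))
            for e in self.emotions
            if e.intensity * (1 - rate) > 0.05
        ]

agent = EmotionalAgent()
agent.appraise("goal_achieved", desirable=True, strength=0.8)
agent.decay()
print(agent.emotions)
```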

    A two-phase method for extracting explanatory arguments from Bayesian networks

    Errors in reasoning about probabilistic evidence can have severe consequences. In the legal domain, a number of recent miscarriages of justice emphasise how severe these consequences can be. These cases, in which forensic evidence was misinterpreted, have ignited a scientific debate on how and when probabilistic reasoning can be incorporated in (legal) argumentation. One promising approach is to use Bayesian networks (BNs), which are well-known scientific models for probabilistic reasoning. For non-statistical experts, however, Bayesian networks may be hard to interpret; in particular, because their inner workings are complicated, they may appear to be black-box models. Argumentation models, by contrast, can be used to show how certain results are derived in a way that naturally corresponds to everyday reasoning. In this paper we propose to explain the inner workings of a BN in terms of arguments. We formalise a two-phase method for extracting probabilistically supported arguments from a Bayesian network: first, from the Bayesian network we construct a support graph, and, second, given a set of observations we build arguments from that support graph. Such arguments can facilitate the correct interpretation and explanation of the relation between hypotheses and evidence that is modelled in the Bayesian network.
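
    As a rough illustration of the two phases, the Python sketch below first derives a crude "support graph" from a BN's parent relation and then assembles evidence-to-hypothesis chains for a set of observations. It is a simplification under assumed inputs (a dict of parent lists, a set of observed node names) and is not the formal method of the paper, which operates on the full probabilistic model.

```python
from collections import defaultdict

def support_graph(parents: dict[str, list[str]], hypothesis: str) -> dict[str, set[str]]:
    """Phase 1 (simplified): from the BN's parent relation, record which
    neighbours can pass support between nodes. Here the BN structure is
    simply treated as an undirected graph around the hypothesis node."""
    neighbours = defaultdict(set)
    for child, pars in parents.items():
        for p in pars:
            neighbours[child].add(p)
            neighbours[p].add(child)
    return neighbours

def arguments(neighbours: dict[str, set[str]], observations: set[str], hypothesis: str):
    """Phase 2 (simplified): given observations, return the chains
    (simple paths) leading from an observed node to the hypothesis."""
    found = []

    def walk(node, path):
        if node == hypothesis:
            found.append(path + [node])
            return
        for nxt in neighbours[node]:
            if nxt not in path:
                walk(nxt, path + [node])

    for obs in observations:
        walk(obs, [])
    return found

# Tiny hypothetical BN: Hypothesis -> Evidence1, Hypothesis -> Motive -> Evidence2
bn_parents = {"Evidence1": ["Hypothesis"], "Motive": ["Hypothesis"], "Evidence2": ["Motive"]}
g = support_graph(bn_parents, "Hypothesis")
print(arguments(g, {"Evidence1", "Evidence2"}, "Hypothesis"))
```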

    Sociotechnical Systems and Ethics in the Large

    Advances in AI techniques and computing platforms have triggered a lively and expanding discourse on ethical decision making by autonomous agents. Much recent work in AI concentrates on the challenges of moral decision making from a decision-theoretic perspective, especially the representation of various ethical dilemmas. Such approaches may be useful but are in general not productive, because moral decision making is as context-driven as other forms of decision making, if not more. In contrast, we consider ethics not from the standpoint of an individual agent but from that of the wider sociotechnical system (STS) in which the agent operates. Our contribution in this paper is a conception of ethical STSs founded on governance that takes into account stakeholder values, normative constraints on agents, and outcomes (states of the STS) that obtain due to actions taken by agents. An important element of our conception is accountability, which is necessary for adequate consideration of outcomes that prima facie appear ethical or unethical. Focusing on STSs avoids the difficult problems of individual ethics, because the norms of the STS give an operational basis for agent decision making.
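
    As a very rough sketch of the conception (not the authors' formalism), the snippet below encodes normative constraints on agents and outcomes of their actions, with a simple accountability check that flags which agent is answerable for which outcome. All names, fields, and the violation test are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A normative constraint binding an agent, e.g. 'nurse must not share records'."""
    subject: str      # the agent the norm binds
    forbidden: str    # an action the norm forbids

@dataclass
class Outcome:
    """A state of the STS that obtained due to an agent's action."""
    agent: str
    action: str

def accountability_report(norms: list[Norm], outcomes: list[Outcome]) -> list[str]:
    # An outcome is prima facie problematic here when the acting agent
    # performed an action that a norm binding that agent forbids.
    report = []
    for out in outcomes:
        for norm in norms:
            if norm.subject == out.agent and norm.forbidden == out.action:
                report.append(f"{out.agent} is accountable for '{out.action}'")
    return report

norms = [Norm(subject="nurse", forbidden="share_records")]
outcomes = [Outcome(agent="nurse", action="share_records"),
            Outcome(agent="doctor", action="share_records")]
print(accountability_report(norms, outcomes))
```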